
Conversation

@tirupath-qti
Contributor

Description

Add support for the FusedMatMul operator in the QNN execution provider.
FusedMatMul is a contrib operator in the Microsoft domain that performs
a fused matrix multiplication with optional bias addition and activation.

Implementation details:

  • Added FusedMatMulOpBuilder class that decomposes FusedMatMul into:
    1. MatMul operation
    2. Optional bias addition
    3. Optional activation (Relu, Sigmoid, Tanh, Gelu)
  • Handles various attributes: transA, transB, alpha, and activation
  • Supports higher rank tensors and different data types
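For intuition, here is a minimal NumPy sketch of the decomposition just described. This is an illustration only, not the QNN EP code: the function name, argument defaults, and the tanh-based Gelu approximation are assumptions made for the sketch.

```python
import numpy as np

def fused_matmul_reference(a, b, trans_a=False, trans_b=False, alpha=1.0,
                           bias=None, activation=None):
    """Toy reference for the decomposition above: MatMul, then optional
    bias addition, then optional activation. Transposes swap the last
    two axes, so batched (higher-rank) inputs also work."""
    if trans_a:
        a = np.swapaxes(a, -2, -1)
    if trans_b:
        b = np.swapaxes(b, -2, -1)
    out = alpha * np.matmul(a, b)      # scaled matrix multiplication
    if bias is not None:
        out = out + bias               # optional bias addition
    if activation == "Relu":
        out = np.maximum(out, 0.0)
    elif activation == "Sigmoid":
        out = 1.0 / (1.0 + np.exp(-out))
    elif activation == "Tanh":
        out = np.tanh(out)
    elif activation == "Gelu":
        # tanh approximation of Gelu, used here only to keep the sketch
        # dependency-free
        out = 0.5 * out * (1.0 + np.tanh(np.sqrt(2.0 / np.pi)
                                         * (out + 0.044715 * out ** 3)))
    return out
```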

Added comprehensive tests:

  • Basic functionality tests with various configurations
  • Tests for both CPU and HTP backends
  • QDQ (Quantize-Dequantize) tests for 8-bit and 16-bit precision
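As background on the QDQ tests mentioned above: such tests typically wrap the float operator in QuantizeLinear/DequantizeLinear pairs and check that quantized accuracy stays close to the float output. A toy NumPy sketch of the 8-bit quantize/dequantize round trip follows; the scale and zero-point choices are illustrative assumptions, not the actual test harness.

```python
import numpy as np

def quantize_linear(x, scale, zero_point):
    # uint8 QuantizeLinear: q = saturate(round(x / scale) + zero_point)
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize_linear(q, scale, zero_point):
    # DequantizeLinear: x ~ (q - zero_point) * scale
    return ((q.astype(np.int32) - zero_point) * scale).astype(np.float32)

x = np.random.randn(2, 3).astype(np.float32)
scale = float(np.abs(x).max()) / 127.5   # illustrative range choice
zero_point = 128
x_roundtrip = dequantize_linear(quantize_linear(x, scale, zero_point),
                                scale, zero_point)
# Round-trip error is bounded by roughly one quantization step.
assert np.allclose(x, x_roundtrip, atol=scale)
```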

Motivation and Context

Since QNN HTP doesn't support FusedMatMul natively, it is decomposed into QNN HTP-supported operators, improving inference time for customer models that contain the FusedMatMul operator.

@tirupath-qti
Contributor Author

@edgchen1 and @yuslepukhin
If possible, can we get this reviewed and tracked for the 1.24 release? This is needed to enable one customer model.

@adrianlizarraga
Contributor

/azp run Linux QNN CI Pipeline,Windows ARM64 QNN CI Pipeline, Win_TRT_Minimal_CUDA_Test_CI, Windows GPU Doc Gen CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 4 pipeline(s).

Copilot AI left a comment

Pull request overview

This PR adds support for the FusedMatMul operator in the QNN execution provider by decomposing it into QNN-supported operations (MatMul, optional transpose, and optional alpha scaling).

Changes:

  • Added FusedMatMulOpBuilder class that decomposes FusedMatMul into MatMul with optional batch transpose and alpha scaling operations
  • Added comprehensive test coverage for FusedMatMul on both CPU and HTP backends, including QDQ tests
  • Modified existing context tests to use FusedGemm instead of FusedMatMul (unrelated to the main purpose)
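Per this summary, then, the lowering emits plain QNN ops rather than relying on a fused kernel. A hedged NumPy sketch of such a MatMul-plus-transpose-plus-scale lowering is below; the function and step comments are illustrative assumptions, since the real builder is C++ and emits QNN graph nodes.

```python
import numpy as np

def lower_fused_matmul(a, b, trans_a=False, trans_b=False, alpha=1.0):
    # Optional transposes of the trailing two axes (the real builder
    # would insert a QNN Transpose node per transposed input).
    if trans_a:
        a = np.swapaxes(a, -2, -1)
    if trans_b:
        b = np.swapaxes(b, -2, -1)
    # Plain matrix multiplication (a QNN MatMul node).
    out = np.matmul(a, b)
    # Optional alpha scaling, skipped when alpha == 1 (an elementwise
    # multiply node in the real graph).
    if alpha != 1.0:
        out = out * np.float32(alpha)
    return out
```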

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 4 comments.

| File | Description |
|---|---|
| onnxruntime/test/providers/qnn/qnn_ep_context_test.cc | Changed test model from FusedMatMul to FusedGemm with adjusted tensor shapes |
| onnxruntime/test/providers/qnn/fused_matmul_op_test.cc | New comprehensive test suite for FusedMatMul operator with various configurations |
| onnxruntime/test/contrib_ops/fused_matmul_op_test.cc | Added QNN EP to exclusion list for existing tests |
| onnxruntime/core/providers/qnn/builder/opbuilder/fused_matmul_op_builder.cc | New operator builder implementation decomposing FusedMatMul into QNN operations |
| onnxruntime/core/providers/qnn/builder/op_builder_factory.h | Added function declaration for FusedMatMul builder |
| onnxruntime/core/providers/qnn/builder/op_builder_factory.cc | Registered FusedMatMul operator builder |


@adrianlizarraga
Contributor

Hi @tirupath-qti, it looks like there are some conflicts that must be resolved before being able to merge. Perhaps your branch needs to be synchronized with the latest changes in main.

@tirupath-qti
Contributor Author

I resolved the conflicts from the GitHub interface and rebased the PR.

@adrianlizarraga
Contributor

/azp run Linux QNN CI Pipeline,Windows ARM64 QNN CI Pipeline, Win_TRT_Minimal_CUDA_Test_CI, Windows GPU Doc Gen CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 4 pipeline(s).

@adrianlizarraga merged commit 46e8d45 into microsoft:main Jan 20, 2026
91 of 93 checks passed
tianleiwu pushed a commit that referenced this pull request Jan 21, 2026
### Description
Add support for the FusedMatMul operator in the QNN execution provider.
 FusedMatMul is a contrib operator in the Microsoft domain that performs
a fused matrix multiplication with optional bias addition and
activation.

Implementation details:
- Added FusedMatMulOpBuilder class that decomposes FusedMatMul into:
  1. MatMul operation
  2. Optional bias addition
  3. Optional activation (Relu, Sigmoid, Tanh, Gelu)
- Handles various attributes: transA, transB, alpha, and activation
- Supports higher rank tensors and different data types

Added comprehensive tests:
- Basic functionality tests with various configurations
- Tests for both CPU and HTP backends
- QDQ (Quantize-Dequantize) tests for 8-bit and 16-bit precision

### Motivation and Context
Since QNN HTP doesn't support FusedMatMul natively, it is decomposed
into QNN HTP-supported operators, improving inference time for customer
models that contain the FusedMatMul operator.

(cherry picked from commit 46e8d45)
tianleiwu added a commit that referenced this pull request Jan 23, 2026
### Description
This PR cherry-picks the following changes for the 1.24.0 release.

### Cherry-picked Commits
| Commit | Commit Title | Author |
|---|---|---|
| 744e7fe | Add type definitions, registration, utilities for INT2/UINT2 support (#26824) | vraspar |
| 530a1fb | [QNN EP] Add BFloat16 dtype support in QNN EP (#26987) | tirupath-qti |
| 8e050d1 | Implement new experimental lookup-based matrix multiplication method (TMAC) (#26695) | vraspar |
| 2d2ba6b | [MLAS/CPU EP] Improve performance of Silu activation path within the QuickGelu CPU kernel (#26753) | Hariharan Seshadri |
| 1c02b79 | [QNN EP] Add support for handling 0-dimension for Concat Op (#27000) | Ashwath Shankarnarayan |
| cc2b01b | Fix ClipQuantFusion crash when Clip has multiple input edges (#27016) | Edward Chen |
| bbd3850 | [QNN EP] Support quantized BatchNorm with per-channel DQ params on QNN HTP (#26959) | qti-yuduo |
| d8f0318 | Add API to get ep graph partitioning info (#26781) | Adrian Lizarraga |
| b912b18 | [OVEP] OpenVINO EP Features and bug-fixes for ORT-1.24 - Follow up (#27007) | Preetha Veeramalai |
| ba11af4 | [QNN-EP] Add MatMulNBits translation for GPU (#26340) | quic-tirupath |
| c03c419 | [MLAS/NEON] Add dedicated kernel for depthwise convolution for ARM64 using NEON intrinsics (#26688) | Hariharan Seshadri |
| e7dfd69 | [QNN-EP] Support alternate Layernorm fusion pattern in QNN preprocess (#26060) | qti-mattsinc |
| 4013dc1 | Implement multithreading in qgemm_kleidi (#26301) | Melike Kaptan |
| 9f06181 | [CXX] Enable users to specify custom OrtSyncStream via RunOptions (#26988) | Dmitri Smirnov |
| cfccd64 | Added support for QMX kernels in MLAS (#26849) | qti-vaiskv |
| 29d9b2f | Tweak external resource importer handle structs (#27040) | Scott McKay |
| 9d108d0 | [QNN EP] Add QuickGELU operator support for QNN provider (#27034) | tirupath-qti |
| b35688f | Add INT2 and UINT2 support for QDQ, transpose and cast ops (#27022) | vraspar |
| 6d34aba | Introducing BF16 Pointwise NCHWc Convolution for Arm64 (#26838) | Rohanjames1997 |
| 36017ad | [EP ABI] Add CreateCustomOpDomains() API for plugin EP to register custom ops (#27050) | Chi Lo |
| 50a03e4 | Add a new pipeline for CUDA 13 nuget builds (#27023) | eserscor |
| a0d4439 | [EP ABI] Update Graph_GetGraphView() implementation (#26711) | Chi Lo |
| 34bb209 | [webgpu] Fix a bug for im2col (#27069) | Wenqin Yang |
| 46e8d45 | [QNN EP] Add FusedMatMul operator support (#27044) | tirupath-qti |
| 5e7e7a3 | Disable Float32_2Bits_Asymmetric_256x256 test (#27046) | vraspar |
| 39f966e | Fix Doxygen documentation build error in onnxruntime_c_api.h (#27083) | Nick Eubanks |
| 8a7a797 | Print tensor for new packed type of 2 bits (#27064) | Tianlei Wu |
| 01f40e6 | Fix GPU JAR testing on Linux (#27011) | eserscor |
| b6ed7f3 | Fix warning around unused code in QNN Android Emulator builds by clang (#27026) | Hariharan Seshadri |
| d7daa45 | Raise the timeout for the ios simulator job (#27045) | Hariharan Seshadri |
| 7e1d818 | upgrade emsdk to 4.0.23 (#27029) | Yulong Wang |
| 347b990 | Fix failing mainline build on Arm64 linux (#27101) | Rohanjames1997 |
| f481b17 | Add dedicated API to support extracting compatibility string from model metadata (#27015) | adrastogi |

---------

Signed-off-by: Liqun Fu <[email protected]>
Signed-off-by: bfilipek <[email protected]>
Signed-off-by: dependabot[bot] <[email protected]>
Signed-off-by: Jonathan Clohessy <[email protected]>
Signed-off-by: Christian Bourjau <[email protected]>
Signed-off-by: melkap01 <[email protected]>
Co-authored-by: vraspar <[email protected]>
Co-authored-by: tirupath-qti <[email protected]>
Co-authored-by: Ashwath Shankarnarayan <[email protected]>
Co-authored-by: Liqun Fu <[email protected]>
Co-authored-by: carzh <[email protected]>
Co-authored-by: Hector Li <[email protected]>
Co-authored-by: carzh <[email protected]>
Co-authored-by: Vrajang Parikh <[email protected]>
Co-authored-by: Hariharan Seshadri <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: Edward Chen <[email protected]>
Co-authored-by: Yuduo Wu <[email protected]>
Co-authored-by: Adrian Lizarraga <[email protected]>
Co-authored-by: Preetha Veeramalai <[email protected]>
Co-authored-by: jatinwadhwa921 <[email protected]>
Co-authored-by: jatinwadhwa921 <[email protected]>
Co-authored-by: saurabh <[email protected]>
Co-authored-by: Ankit Maheshkar <[email protected]>
Co-authored-by: sfatimar <[email protected]>
Co-authored-by: Javier Martinez <[email protected]>
Co-authored-by: Bartlomiej Filipek <[email protected]>
Co-authored-by: bopeng1234 <[email protected]>
Co-authored-by: Eric Crawford <[email protected]>
Co-authored-by: MayureshV1 <[email protected]>
Co-authored-by: TejalKhade28 <[email protected]>
Co-authored-by: Vishnudas Thaniel S <[email protected]>
Co-authored-by: Yaru Du <[email protected]>
Co-authored-by: Ryan Metcalfe <[email protected]>
Co-authored-by: Dvoretckii, Mikhail <[email protected]>
Co-authored-by: Pallavi Gupta <[email protected]>
Co-authored-by: Jianhui Dai <[email protected]>
Co-authored-by: Jiajia Qin <[email protected]>
Co-authored-by: Changming Sun <[email protected]>
Co-authored-by: Fei Chen <[email protected]>
Co-authored-by: Yulong Wang <[email protected]>
Co-authored-by: Akupadhye <[email protected]>
Co-authored-by: Wang Ning <[email protected]>
Co-authored-by: Maximilian Müller <[email protected]>
Co-authored-by: Chi Lo <[email protected]>
Co-authored-by: George Wu <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Wanming Lin <[email protected]>
Co-authored-by: quic-calvnguy <[email protected]>
Co-authored-by: Jie Chen <[email protected]>
Co-authored-by: xhcao <[email protected]>
Co-authored-by: Wei-Sheng Chin <[email protected]>
Co-authored-by: quic-hungjuiw <[email protected]>
Co-authored-by: Ian Hunter <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: kunal-vaishnavi <[email protected]>
Co-authored-by: Jeff Kilpatrick <[email protected]>
Co-authored-by: Jeff Kilpatrick <[email protected]>
Co-authored-by: Scott McKay <[email protected]>
Co-authored-by: Nenad Banfic <[email protected]>
Co-authored-by: derdeljan-msft <[email protected]>
Co-authored-by: n1harika <[email protected]>
Co-authored-by: Ryan Metcalfe <[email protected]>
Co-authored-by: Jaswanth Gannamaneni <[email protected]>
Co-authored-by: Klimenko, Mikhail <[email protected]>
Co-authored-by: liang <[email protected]>
Co-authored-by: Garth Long <[email protected]>
Co-authored-by: Jonathan Clohessy <[email protected]>
Co-authored-by: Akshay Sonawane <[email protected]>
Co-authored-by: Christopher Warrington <[email protected]>
Co-authored-by: Ishwar Raut <[email protected]>
Co-authored-by: Gaurav Garg <[email protected]>
Co-authored-by: Xinpeng Dou <[email protected]>
Co-authored-by: adrastogi <[email protected]>
Co-authored-by: Aditya Rastogi <[email protected]>
Co-authored-by: qti-hungjuiw <[email protected]>
Co-authored-by: Pradeep Sakhamoori <[email protected]>
Co-authored-by: Adam Pocock <[email protected]>
Co-authored-by: mingyue <[email protected]>
Co-authored-by: Susanta Bhattacharjee <[email protected]>
Co-authored-by: Jozef Wludzik <[email protected]>
Co-authored-by: Rajeev Sekar <[email protected]>
Co-authored-by: Mayuresh M Varerkar <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: Wenqin Yang <[email protected]>
Co-authored-by: xieofxie <[email protected]>
Co-authored-by: hualxie <[email protected]>
Co-authored-by: Joshua Lochner <[email protected]>
Co-authored-by: Christian Bourjau <[email protected]>
Co-authored-by: Xiaofei Han <[email protected]>
Co-authored-by: Dmitri Smirnov <[email protected]>
Co-authored-by: chunghow-qti <[email protected]>
Co-authored-by: Guenther Schmuelling <[email protected]>
Co-authored-by: Jiawei Shao <[email protected]>
Co-authored-by: czekun <[email protected]>
Co-authored-by: Jaskaran Singh Nagi <[email protected]>
Co-authored-by: quic-tirupath <[email protected]>
Co-authored-by: qti-mattsinc <[email protected]>
Co-authored-by: Melike Kaptan <[email protected]>
Co-authored-by: Damien Dooley <[email protected]>
Co-authored-by: qti-vaiskv <[email protected]>
Co-authored-by: Rohanjames1997 <[email protected]>
Co-authored-by: eserscor <[email protected]>
Co-authored-by: eserscor <[email protected]>
Co-authored-by: Nick Eubanks <[email protected]>
Co-authored-by: adrastogi <[email protected]>
Co-authored-by: Rohanjames1997 <[email protected]>
alex-spacemit pushed a commit to spacemit-com/onnxruntime that referenced this pull request Jan 27, 2026
